Discussion about this post

neuro morph

As an AI Safety researcher who is freaked out about catastrophic risks (e.g. bioweapons) from AI (sometimes to the point of rudeness, sorry): this is the best take on AI regulation I've yet seen, by far.

You really hit the nail on the head with:

"But how do we regulate an industrial revolution? How do we regulate an era?

There is no way to pass “a law,” or a set of laws, to control an industrial revolution."

The Narrow Path essay and Max Tegmark's Hopium essay both seem to suggest that, because the AI industrial revolution is scary, we should ask the US Federal government to wave a magic wand and make it not happen. That simply isn't an option on the table, and any realistic plan must start by facing up to that.

Rather than trying to tell people not to use AI for science or AI for improving AI, we should aim to channel it. Offer rewards (e.g. subsidized compute) to researchers in exchange for operating in a loosely supervised setting. Focus on preventing only the very worst civilization-scale risks rather than micromanaging. Trying to rein in developers with overly stringent rules will just drive research underground. We can't stop the tide of technology.

Thomas

Hi Dean! While this is not my area, I'm glad I subscribed a while ago, as I'm finding your writing very interesting. Hope you have a nice holiday season!
